Dang Bich Thuy Le - Chitvan Bharatkumar Patel¶

Style Transfer, Yolo, Embeddings¶

In this lab, you will work on the following concepts.

  • Style transfer
  • Yolo
  • Embeddings
In [2]:
# portions of this lab were taken from Deep Learning with Python

# !pip3 install tqdm
import glob
import os
import random
import shutil

import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import *
from tensorflow.keras import regularizers

import matplotlib.pyplot as plt
%matplotlib inline

import pandas as pd

import numpy as np
from random import shuffle
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.preprocessing import MinMaxScaler

print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))
2.9.1
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

1. Neural Style Transfer¶

Execute the neural style transfer algorithm on an Santa Clara University image to generate a GIF of the different generations of the picture.

https://keras.io/examples/generative/neural_style_transfer/

Example of style images are provided in the styles directory. These images usually produce good results in style transfer.

rio-de-janeiro.jpeg

With neural style transfer.

rj_rf_at_iteration_4.png

Read the code from Keras, execute it to generate images, and explain in your words how Neural Style Transfer works. You should utilize ChatGPT or equivalent to help you write the summary. ¶

Neural style transfer involves using a convolutional neural network to extract the content and style information from two input images, and then optimizing an output image to match the content statistics of the content image and the style statistics of the style reference image. The resulting image appears to be a combination of the two original images, with the content of the content image and the style of the style reference image.

We use the intermediate layers of the convolutional neural network to get the content and style representations of the image. From the network's input layer, the first few layer activations represent low-level features like edges and textures. As stepping through the network, the final few layers represent higher-level features—object parts

In the code from Keras, VGG19 network architecture, which is a pretrained image classification network was used. These intermediate layers are necessary to define the representation of content and style from the images. For an input image, we try to match the corresponding style and content target representations at these intermediate layers. The output image is gradually refined through multiple iterations until the loss is minimized and the desired style transfer effect is achieved.

The optimization process involves minimizing a loss function that combines the content loss and the style loss, and total variation loss which are calculated based on the feature maps extracted from the neural network. The style loss is defined using a deep convolutional neural network, consisting in a sum of L2 distances between the Gram matrices of the representations of the base image and the style reference image, extracted from different layers of a VGG19. It is to capture color/texture information at different spatial scales. The content loss is a L2 distance between the features of the base image (extracted from a deep layer) and the features of the combination image, keeping the generated image close enough to the original one.

The process of Neural Style Transfer:

Input Images: select the content image and the style image. The content image is typically a photograph, while the style image can be a painting or other type of artwork.

Preprocessing: Both the content and style images are preprocessed before being used in the neural network. This typically involves resizing the images to a standard size and normalizing the pixel values.

Feature Extraction: The next step is to extract the content and style features from the preprocessed images using a convolutional neural network, typically a pre-trained model such as VGG-19 or ResNet-50, which has been trained on a large dataset of images and has learned to extract features such as edges, shapes, and textures.

Style and Content Loss: Once the features have been extracted, the style and content information can be compared between the two images. This is done by calculating the style and content loss. The style loss is calculated by comparing the correlation of the features across different layers of the network between the style image and the generated image by calculating a Gram matrix that includes this information by taking the outer product of the feature vector with itself at each location, and averaging that outer product over all locations, while the content loss is calculated by comparing the feature maps of the content image and the generated image.

Optimization: The final step is to optimize the generated image by minimizing the total loss, which is a combination of the style and content loss. This is done using an optimization algorithm such as gradient descent, which iteratively adjusts the pixel values of the generated image to minimize the loss.

In [28]:
# wE trained in different computer and save out intermediate images. 
# Below code is to generate gif file from intermediate images.
# Image 1
import imageio.v2 as imageio
import os
images = []
file_path = 'scu_style_transfer_1/'
filenames = sorted(os.listdir(file_path))
with imageio.get_writer('scu_style_transfer_1.gif', mode='I', duration=0.08) as writer:
    for filename in filenames:
        image = imageio.imread(file_path + filename)
        writer.append_data(image)
In [29]:
# Image 2
images = []
file_path = 'scu_style_transfer_2/'
filenames = sorted(os.listdir(file_path))
with imageio.get_writer('scu_style_transfer_2.gif', mode='I', duration=0.08) as writer:
    for filename in filenames:
        image = imageio.imread(file_path + filename)
        writer.append_data(image)
In [4]:
# Show style image and two GIF images
from IPython.display import Image, display
base_image_paths = ['scu_style_transfer_2.gif', 'scu_style_transfer_1.gif']
style_reference_image_path = '9ooB60I.jpg'
display(Image(style_reference_image_path))
# for base_image_path in base_image_paths:
#     display(Image(base_image_path))
    

scu_style_transfer_1.gif

scu_style_transfer_2.gif

2. YOLO - Detecting Bananas in Your Hand¶

In this part of the lab, you will add YOLO to a video stream as given below using cv2 to detect bananas in a video stream.

To install OpenCV, type:

pip3 install opencv-python

2.1. Training YOLO To Detect Bananas¶

You will implement YOLO to detect bananas in images.

2.png

Resources for YOLO can be found in:

https://machinelearningmastery.com/how-to-perform-object-detection-with-yolov3-in-keras/

https://www.analyticsvidhya.com/blog/2018/12/practical-guide-object-detection-yolo-framewor-python/

https://www.analyticsvidhya.com/blog/2018/10/a-step-by-step-introduction-to-the-basic-object-detection-algorithms-part-1/

https://medium.com/@enriqueav/object-detection-with-yolo-implementations-and-how-to-use-them-5da928356035

https://github.com/experiencor/keras-yolo2

The directory bananas-detection contains two directories, bananas-train and bananas_train and bananas_val, and label.csv contains the name of the file and the enclosing box for banana in the image.

In [61]:
import pandas as pd
df = pd.read_csv('banana-detection/bananas_val/label.csv')
df
Out[61]:
img_name label xmin ymin xmax ymax
0 0.png 0 183 63 241 112
1 1.png 0 26 86 79 133
2 2.png 0 139 108 178 148
3 3.png 0 20 130 63 170
4 4.png 0 30 103 98 152
... ... ... ... ... ... ...
95 95.png 0 101 191 142 231
96 96.png 0 78 141 130 203
97 97.png 0 79 59 129 123
98 98.png 0 167 42 205 89
99 99.png 0 190 75 239 117

100 rows × 6 columns

Explain in your words how Yolo works. You should utilize ChatGPT or equivalent to help you write the summary.

YOLO (You Only Look Once) is a real-time object detection in images and videos algorithm that works by dividing the input image into a grid of cells and predicting the presence of objects within each cell. Typically, the grid size is 19x19 or 13x13. For each cell, YOLO predicts a set of bounding boxes that potentially contain objects, using a deep convolutional neural network to make these predictions. The network is trained on a large dataset of images with annotated objects and learns to recognize objects by analyzing their visual features. Each bounding box is represented by five values: (x, y, w, h, confidence). For each bounding box, YOLO calculates an objectness score, which represents the probability that the bounding box contains an object. For each bounding box having a high objectness score, YOLO predicts the class of the object within the box. Then YOLO uses non-max suppression to eliminate duplicate detections of the same object. The algorithm compares the objectness scores of all bounding boxes that overlap and removes those with lower scores. The final output of YOLO is a list of bounding boxes, each with a class label and corresponding confidence score.

2.2. Using CV2¶

In the second part of the lab, you will use OpenCV to detect the position of a banana on a video stream.

As we can't do a video stream, we created a video by combining all image in validation folder. Then using the trained model to detect banana in this video.

In [14]:
# Create a video from all images in banana_val
import cv2
import os

image_folder = 'banana-detection/bananas_val/images/'
video_name = 'banana_val.avi'

images = [img for img in os.listdir(image_folder) if img.endswith(".png")]
frame = cv2.imread(os.path.join(image_folder, images[0]))
height, width, layers = frame.shape

video = cv2.VideoWriter(video_name, 0, 1, (width,height))

for image in images:
    video.write(cv2.imread(os.path.join(image_folder, image)))

cv2.destroyAllWindows()
video.release()
In [ ]:
# train on colab the Yolo network on bigger machine
# save weights and any other configuration parameters
In [1]:
import cv2
import torch
from numpy import random
from models.experimental import attempt_load
from utils.datasets import LoadImages
from utils.general import check_img_size, non_max_suppression, scale_coords, xyxy2xywh, increment_path
from utils.plots import plot_one_box
from utils.torch_utils import select_device


def detect(source, weights=, save_img=True, imgsz=640, project='runs/detect', device='', conf_thres=0.25, iou_thres=0.45):
    # Directories
    save_dir = Path(increment_path(Path(project) / 'exp', exist_ok=False))  # increment run
    (save_dir).mkdir(parents=True, exist_ok=True)  # make dir

    # Initialize
    device = select_device(device)

    # Load model
    model = attempt_load(weights, map_location=device)  # load FP32 model
    stride = int(model.stride.max())  # model stride
    imgsz = check_img_size(imgsz, s=stride)  # check img_size

    # Set Dataloader
    vid_path, vid_writer = None, None

    dataset = LoadImages(source, img_size=imgsz, stride=stride)

    # Get names and colors
    names = model.module.names if hasattr(model, 'module') else model.names
    colors = [[random.randint(0, 255) for _ in range(3)] for _ in names]

    # Run inference
    if device.type != 'cpu':
        model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
    old_img_w = old_img_h = imgsz
    old_img_b = 1

    for path, img, im0s, vid_cap in dataset:
        img = torch.from_numpy(img).to(device)
        img = img.float()
        img /= 256.0 
        if img.ndimension() == 3:
            img = img.unsqueeze(0)

        # Inference
        with torch.no_grad():   # Calculating gradients would cause a GPU memory leak
            pred = model(img, augment=False)[0]

        # Apply NMS
#         print('opt.iou_thres', conf_thres)
        pred = non_max_suppression(pred, conf_thres, iou_thres)

        # Process detections
        for i, det in enumerate(pred):  # detections per image

            p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0)

            p = Path(p)  # to Path
            save_path = str(save_dir / p.name)  # img.jpg
            txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # img.txt
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
            if len(det):
                # Rescale boxes from img_size to im0 size
                det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

                # Print results
                for c in det[:, -1].unique():
                    n = (det[:, -1] == c).sum()  # detections per class
                    s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

                # Write results
                for *xyxy, conf, cls in reversed(det):
                    label = f'{names[int(cls)]} {conf:.2f}'
                    plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=1)

            # Save results (image with detections)
            if save_img:
                if dataset.mode == 'image':
                    cv2.imwrite(save_path, im0)
#                     print(f" The image with the result is saved in: {save_path}")
                else:  # 'video' or 'stream'
                    if vid_path != save_path:  # new video
                        vid_path = save_path
                        if isinstance(vid_writer, cv2.VideoWriter):
                            vid_writer.release()  # release previous video writer
                        if vid_cap:  # video
                            fps = vid_cap.get(cv2.CAP_PROP_FPS)
                            w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                            h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

                        vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
                    vid_writer.write(im0)

with torch.no_grad():
    detect(source='banana_val.avi', weights='yolo_banana.pt')
Fusing layers... 
IDetect.fuse
/home/tt/miniconda3/envs/yolov7/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2228.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
In [2]:
import cv2

vc = cv2.VideoCapture(0)

while True:
    ret, frame = vc.read()
    h, w, _ = frame.shape
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (256, 256))
    
    img_to_deep_learning = img

    # Add your code here to add YOLO

    img_mean = np.array([127, 127, 127])
    img = (img - img_mean) / 128
    
    img = np.transpose(img, [2, 0, 1])
    img = np.expand_dims(img, axis=0)
    img = img.astype(np.float32)
    
    cv2.imshow("Video", frame)
  
    if cv2.waitKey(1) & 0xFF == ord("q"): break

vc.release()
cv2.destroyAllWindows()

As part of the submission results, you will submit a video of banana detection.

3. Word2Vec¶

Let's first download a word2vec dictionary with a vector dimension 300 for each word from Google News.

In [320]:
import gensim.downloader as api
In [321]:
wv = api.load("word2vec-google-news-300")
In [322]:
king = wv["king"]

king.shape
Out[322]:
(300,)

Because each word is represented as a vector in the space $\cal{R}^{300}$, we can compute similarity between words.

In [323]:
print(wv.most_similar(positive=["king", "queen", 'royal'], topn=10))
[('monarch', 0.7630465626716614), ('prince', 0.7122635841369629), ('princess', 0.6952192783355713), ('royals', 0.691109836101532), ('princes', 0.6675854325294495), ('kings', 0.6575640439987183), ('queens', 0.6341844797134399), ('crown_prince', 0.6330040693283081), ('Queen_Consort', 0.6233131289482117), ('NYC_anglophiles_aflutter', 0.6210921406745911)]

In this case, you can simply compute the cosine distance between two words.

In [324]:
print(wv.similarity("suv", "car" ))
0.6054363

And you can compute words that match vector operations such as $\tt{vec(king)} - \tt{vec(man)} + \tt{vec(woman)} = ?$

In [325]:
print(wv.most_similar(positive=["king", "queen"], negative=["man"]))
[('queens', 0.595018744468689), ('monarch', 0.5815044641494751), ('kings', 0.5612992644309998), ('royal', 0.5204525589942932), ('princess', 0.5191516876220703), ('princes', 0.5086392164230347), ('NYC_anglophiles_aflutter', 0.5057314038276672), ('Queen_Consort', 0.49256712198257446), ('Queen', 0.4822567403316498), ('royals', 0.4781743586063385)]

Because every word is represented by a vector, at the end a ML model will only be able to represent vectors or differences in vectors as represented by the embedding. If the training set contains bias, the representation of the vectors will be as good as the training set.

In [326]:
print(wv.most_similar(positive=["king", "serve"], negative=["rule"]))
[('served', 0.43512797355651855), ('serving', 0.4316672086715698), ('Serving', 0.4261252284049988), ('serves', 0.42303547263145447), ('queen', 0.3676721751689911), ('kings', 0.3305252492427826), ('Sir_Francis_Walsingham', 0.32565033435821533), ('Sow_Tracey_Ullman', 0.32548266649246216), ('Centrepoint_patron', 0.3240561783313751), ('Serves', 0.3230326473712921)]

Let's see what's happening here by visualizing these 4 points.

In [328]:
king = wv['king']
queen = wv['queen']
serve = wv['serve']
rule = wv['rule']

labels = ['king', 'queen', 'serve', 'rule']
x = np.array([king, queen, serve, rule])

tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_results = tsne.fit_transform(x)

print(tsne_results)

for label, coord in zip(labels, tsne_results):
    plt.plot(coord[0], coord[1], '*', label=label)
plt.legend()
plt.show()
[t-SNE] Computing 3 nearest neighbors...
[t-SNE] Indexed 4 samples in 0.000s...
[t-SNE] Computed neighbors for 4 samples in 0.000s...
[t-SNE] Computed conditional probabilities for sample 4 / 4
[t-SNE] Mean sigma: 1125899906842624.000000
[t-SNE] KL divergence after 250 iterations with early exaggeration: 33.158478
[t-SNE] KL divergence after 300 iterations: 0.052353
[[-106.9377     181.19319  ]
 [-379.79224    -69.226204 ]
 [ 208.99045      2.0675533]
 [ -53.837013  -261.69492  ]]
/home/tt/miniconda3/lib/python3.9/site-packages/sklearn/manifold/_t_sne.py:795: FutureWarning: The default initialization in TSNE will change from 'random' to 'pca' in 1.2.
  warnings.warn(
/home/tt/miniconda3/lib/python3.9/site-packages/sklearn/manifold/_t_sne.py:805: FutureWarning: The default learning rate in TSNE will change from 200.0 to 'auto' in 1.2.
  warnings.warn(

Why is it important to remove bias from the embeddings?

Imagine in a situation where you are analyzing if you are going to give loan to a person based on the difference of two embeddings $\sum (v_1 - v_2)$, and suppose that the same difference appears in $\sum (v_3 - v_4)$.

For example, $v_1$ could be derived a sentence implying the person has good credit, and $v_2$ could be derived from a sentence where the person has no outstanding loans, whereas $v_3$ could represent a person's race, and $v_4$ could be derived from the person not paying the loan.

In our system, we will give a score based on the following result $v_1 - v_2 + v_3$ (very simple logistic classifier). The answer to the system could be a score that represents the person will pay the loan or the person will not pay the loan. What will happen in this case? If we don't remove bias from embeddings, the system may give score that bias on the race. When we remove bias, the answer to the system that the person will pay the loan or will not pay the loan will be more balance with the race

4. Using Embeddings in Gradient Boosting¶

In this lab, you will use categorical embeddings to improve gradient boosting techniques (they are quite popular to win Kaggle competitions).

Suppose you have a table where one of the fields is day of the week (Monday = 0, Tuesday = 1, etc).

Several techniques exist to encode these variables, one-hot, using the average of the output label, etc.

Idea in this case is to use an auxiliary model with embeddings for the categorical variables, use the model to predict the output variable, and then transfer the values of the embeddings to the a new table, where you will run a gradient boosting algorithm.

Part of this lab can be found in the following page.

https://towardsdatascience.com/deep-embeddings-for-categorical-variables-cat2vec-b05c8ab63ac0

In [238]:
df = pd.read_csv('bike_sharing_daily.csv')
df
Out[238]:
instant dteday season yr mnth holiday weekday workingday weathersit temp atemp hum windspeed casual registered cnt
0 1 2011-01-01 1 0 1 0 6 0 2 0.344167 0.363625 0.805833 0.160446 331 654 985
1 2 2011-01-02 1 0 1 0 0 0 2 0.363478 0.353739 0.696087 0.248539 131 670 801
2 3 2011-01-03 1 0 1 0 1 1 1 0.196364 0.189405 0.437273 0.248309 120 1229 1349
3 4 2011-01-04 1 0 1 0 2 1 1 0.200000 0.212122 0.590435 0.160296 108 1454 1562
4 5 2011-01-05 1 0 1 0 3 1 1 0.226957 0.229270 0.436957 0.186900 82 1518 1600
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
726 727 2012-12-27 1 1 12 0 4 1 2 0.254167 0.226642 0.652917 0.350133 247 1867 2114
727 728 2012-12-28 1 1 12 0 5 1 2 0.253333 0.255046 0.590000 0.155471 644 2451 3095
728 729 2012-12-29 1 1 12 0 6 0 2 0.253333 0.242400 0.752917 0.124383 159 1182 1341
729 730 2012-12-30 1 1 12 0 0 0 1 0.255833 0.231700 0.483333 0.350754 364 1432 1796
730 731 2012-12-31 1 1 12 0 1 1 2 0.215833 0.223487 0.577500 0.154846 439 2290 2729

731 rows × 16 columns

In [239]:
plt.hist(df.cnt.values, bins=20)
plt.show()

You will answer the following questions. This is related to the paper Cat2Vec (found at https://openreview.net/forum?id=HyNxRZ9xg).

4.1. In this table, which variables are categorical?¶

In this table, season, yr, mnth, holiday, weekday, workingday, weathersit are categorical variables

4.2. How many embeddings do you need to create?¶

We can choose the variables to do embeddings. Here, we choose all except working day as working day and weekday are related.

4.3. What are the embeddings size?¶

Jeremy Howard suggested that the size of hte embedding vector should be $\tt{embedding\_size} = \min(50, (m+1) / 2)$, where $m$ is the number of categories.

Here, we used same as your suggestion: min(50, (n_values+1) // 2) + 1 which are 4, 2, 8, 2, 5, 3 for season, yr, mnth, holiday, weekday, weathersit, respectively

4.4. Train embeddings model¶

Train a small model with a number of embeddings, concatenated, and one dense layer, trying to predict the variable cnt.

xl = []
for i in range(number_of_categorical_variables):
    x = Embedding(...)(input[i]) # (NI, NO)
    xl.append(x)
x = Concatente()(xl)
x = Dense(1)(x)

4.5. What are the sizes of the embedding matrices?¶

season: (NI, 4) yr: (NI, 2) mnth: (NI, 8) holiday: (NI, 2) weekday: (NI, 5) weathersit: (NI, 3)

We create an matrices of size of (NI, 24)

4.6. Create a new table with the values and embedded values¶

4.7. What's Gradient Boosting?¶

Gradient boosting is a machine learning technique used for both regression and classification problems. It works by building an ensemble of weak learners, which are typically decision trees, and combining their predictions to make a stronger overall prediction. This involves adding new weak learners to the ensemble, and adjusting the weights of existing learners.

4.8. Use Gradient Boosting (xgboost or catboost) to train the model with the new table¶

In [272]:
# add variables you will choose to create embeddings for
df_emb_vars = [
    'season',
    'yr',
    'mnth',
    'holiday',
    'weekday',
#     'workingday',
    'weathersit'
]
In [273]:
x = xi = Input((len(df_emb_vars),))
xl = []
for i in range(len(df_emb_vars)):
    v_min = df[df_emb_vars[i]].min()
    v_max = df[df_emb_vars[i]].max()
    n_values = v_max + 1
    output_dim = min(50, (n_values+1) // 2) + 1
    x = Embedding(
        n_values,
        output_dim, 
        input_length=1,
        name=df_emb_vars[i]
    )(xi[..., i]) # (NI, NO)
    xl.append(x)
x = Concatenate()(xl)
x = Dense(1)(x)

m = tf.keras.models.Model(xi, x)
m.compile(loss='mae', optimizer=tf.keras.optimizers.Adam(0.001))
In [274]:
m.summary()
Model: "model_27"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 input_32 (InputLayer)          [(None, 6)]          0           []                               
                                                                                                  
 tf.__operators__.getitem_107 (  (None,)             0           ['input_32[0][0]']               
 SlicingOpLambda)                                                                                 
                                                                                                  
 tf.__operators__.getitem_108 (  (None,)             0           ['input_32[0][0]']               
 SlicingOpLambda)                                                                                 
                                                                                                  
 tf.__operators__.getitem_109 (  (None,)             0           ['input_32[0][0]']               
 SlicingOpLambda)                                                                                 
                                                                                                  
 tf.__operators__.getitem_110 (  (None,)             0           ['input_32[0][0]']               
 SlicingOpLambda)                                                                                 
                                                                                                  
 tf.__operators__.getitem_111 (  (None,)             0           ['input_32[0][0]']               
 SlicingOpLambda)                                                                                 
                                                                                                  
 tf.__operators__.getitem_112 (  (None,)             0           ['input_32[0][0]']               
 SlicingOpLambda)                                                                                 
                                                                                                  
 season (Embedding)             (None, 4)            20          ['tf.__operators__.getitem_107[0]
                                                                 [0]']                            
                                                                                                  
 yr (Embedding)                 (None, 2)            4           ['tf.__operators__.getitem_108[0]
                                                                 [0]']                            
                                                                                                  
 mnth (Embedding)               (None, 8)            104         ['tf.__operators__.getitem_109[0]
                                                                 [0]']                            
                                                                                                  
 holiday (Embedding)            (None, 2)            4           ['tf.__operators__.getitem_110[0]
                                                                 [0]']                            
                                                                                                  
 weekday (Embedding)            (None, 5)            35          ['tf.__operators__.getitem_111[0]
                                                                 [0]']                            
                                                                                                  
 weathersit (Embedding)         (None, 3)            12          ['tf.__operators__.getitem_112[0]
                                                                 [0]']                            
                                                                                                  
 concatenate_21 (Concatenate)   (None, 24)           0           ['season[0][0]',                 
                                                                  'yr[0][0]',                     
                                                                  'mnth[0][0]',                   
                                                                  'holiday[0][0]',                
                                                                  'weekday[0][0]',                
                                                                  'weathersit[0][0]']             
                                                                                                  
 dense_52 (Dense)               (None, 1)            25          ['concatenate_21[0][0]']         
                                                                                                  
==================================================================================================
Total params: 204
Trainable params: 204
Non-trainable params: 0
__________________________________________________________________________________________________
In [275]:
x = df[df_emb_vars].values
y = df.cnt.values

train_size = int(0.9 * len(x))

# split train and test sets
x_train = x[:train_size].copy()
x_test = x[train_size:].copy()

y_train = y[:train_size].copy()
y_test = y[train_size:].copy()
In [276]:
scaler = MinMaxScaler()
df['cnt_Scaled'] = scaler.fit_transform(df[['cnt']])
In [277]:
# scale train and test data
y_train_s = scaler.fit_transform(y_train.reshape(-1, 1))
y_test_s = scaler.fit_transform(y_test.reshape(-1,1))
In [278]:
# train model
embed_history = m.fit(x_train,
                        y_train_s,
                        validation_data = (x_test, y_test_s),
                        batch_size=1024,
                        epochs=350)
Epoch 1/350
1/1 [==============================] - 0s 425ms/step - loss: 0.5373 - val_loss: 0.6003
Epoch 2/350
1/1 [==============================] - 0s 12ms/step - loss: 0.5304 - val_loss: 0.5934
Epoch 3/350
1/1 [==============================] - 0s 12ms/step - loss: 0.5235 - val_loss: 0.5865
Epoch 4/350
1/1 [==============================] - 0s 13ms/step - loss: 0.5166 - val_loss: 0.5797
Epoch 5/350
1/1 [==============================] - 0s 13ms/step - loss: 0.5097 - val_loss: 0.5728
Epoch 6/350
1/1 [==============================] - 0s 16ms/step - loss: 0.5028 - val_loss: 0.5660
Epoch 7/350
1/1 [==============================] - 0s 15ms/step - loss: 0.4960 - val_loss: 0.5594
Epoch 8/350
1/1 [==============================] - 0s 15ms/step - loss: 0.4891 - val_loss: 0.5527
Epoch 9/350
1/1 [==============================] - 0s 13ms/step - loss: 0.4823 - val_loss: 0.5461
Epoch 10/350
1/1 [==============================] - 0s 14ms/step - loss: 0.4754 - val_loss: 0.5394
Epoch 11/350
1/1 [==============================] - 0s 13ms/step - loss: 0.4686 - val_loss: 0.5328
Epoch 12/350
1/1 [==============================] - 0s 13ms/step - loss: 0.4619 - val_loss: 0.5261
Epoch 13/350
1/1 [==============================] - 0s 13ms/step - loss: 0.4551 - val_loss: 0.5195
Epoch 14/350
1/1 [==============================] - 0s 14ms/step - loss: 0.4484 - val_loss: 0.5128
Epoch 15/350
1/1 [==============================] - 0s 14ms/step - loss: 0.4417 - val_loss: 0.5063
Epoch 16/350
1/1 [==============================] - 0s 13ms/step - loss: 0.4350 - val_loss: 0.4999
Epoch 17/350
1/1 [==============================] - 0s 17ms/step - loss: 0.4284 - val_loss: 0.4934
Epoch 18/350
1/1 [==============================] - 0s 13ms/step - loss: 0.4218 - val_loss: 0.4869
Epoch 19/350
1/1 [==============================] - 0s 13ms/step - loss: 0.4151 - val_loss: 0.4804
Epoch 20/350
1/1 [==============================] - 0s 15ms/step - loss: 0.4085 - val_loss: 0.4740
Epoch 21/350
1/1 [==============================] - 0s 13ms/step - loss: 0.4020 - val_loss: 0.4677
Epoch 22/350
1/1 [==============================] - 0s 13ms/step - loss: 0.3956 - val_loss: 0.4613
Epoch 23/350
1/1 [==============================] - 0s 13ms/step - loss: 0.3892 - val_loss: 0.4549
Epoch 24/350
1/1 [==============================] - 0s 13ms/step - loss: 0.3829 - val_loss: 0.4488
Epoch 25/350
1/1 [==============================] - 0s 14ms/step - loss: 0.3766 - val_loss: 0.4426
Epoch 26/350
1/1 [==============================] - 0s 13ms/step - loss: 0.3705 - val_loss: 0.4366
Epoch 27/350
1/1 [==============================] - 0s 12ms/step - loss: 0.3644 - val_loss: 0.4305
Epoch 28/350
1/1 [==============================] - 0s 15ms/step - loss: 0.3585 - val_loss: 0.4245
Epoch 29/350
1/1 [==============================] - 0s 12ms/step - loss: 0.3527 - val_loss: 0.4184
Epoch 30/350
1/1 [==============================] - 0s 13ms/step - loss: 0.3470 - val_loss: 0.4125
Epoch 31/350
1/1 [==============================] - 0s 12ms/step - loss: 0.3414 - val_loss: 0.4067
Epoch 32/350
1/1 [==============================] - 0s 16ms/step - loss: 0.3358 - val_loss: 0.4008
Epoch 33/350
1/1 [==============================] - 0s 12ms/step - loss: 0.3303 - val_loss: 0.3949
Epoch 34/350
1/1 [==============================] - 0s 12ms/step - loss: 0.3248 - val_loss: 0.3889
Epoch 35/350
1/1 [==============================] - 0s 13ms/step - loss: 0.3194 - val_loss: 0.3830
Epoch 36/350
1/1 [==============================] - 0s 11ms/step - loss: 0.3142 - val_loss: 0.3771
Epoch 37/350
1/1 [==============================] - 0s 16ms/step - loss: 0.3090 - val_loss: 0.3711
Epoch 38/350
1/1 [==============================] - 0s 12ms/step - loss: 0.3039 - val_loss: 0.3653
Epoch 39/350
1/1 [==============================] - 0s 15ms/step - loss: 0.2989 - val_loss: 0.3595
Epoch 40/350
1/1 [==============================] - 0s 17ms/step - loss: 0.2939 - val_loss: 0.3540
Epoch 41/350
1/1 [==============================] - 0s 15ms/step - loss: 0.2891 - val_loss: 0.3486
Epoch 42/350
1/1 [==============================] - 0s 16ms/step - loss: 0.2844 - val_loss: 0.3432
Epoch 43/350
1/1 [==============================] - 0s 15ms/step - loss: 0.2798 - val_loss: 0.3379
Epoch 44/350
1/1 [==============================] - 0s 13ms/step - loss: 0.2752 - val_loss: 0.3327
Epoch 45/350
1/1 [==============================] - 0s 13ms/step - loss: 0.2707 - val_loss: 0.3276
Epoch 46/350
1/1 [==============================] - 0s 13ms/step - loss: 0.2662 - val_loss: 0.3227
Epoch 47/350
1/1 [==============================] - 0s 15ms/step - loss: 0.2618 - val_loss: 0.3180
Epoch 48/350
1/1 [==============================] - 0s 15ms/step - loss: 0.2575 - val_loss: 0.3133
Epoch 49/350
1/1 [==============================] - 0s 14ms/step - loss: 0.2531 - val_loss: 0.3088
Epoch 50/350
1/1 [==============================] - 0s 15ms/step - loss: 0.2489 - val_loss: 0.3044
Epoch 51/350
1/1 [==============================] - 0s 17ms/step - loss: 0.2447 - val_loss: 0.2999
Epoch 52/350
1/1 [==============================] - 0s 15ms/step - loss: 0.2406 - val_loss: 0.2955
Epoch 53/350
1/1 [==============================] - 0s 13ms/step - loss: 0.2366 - val_loss: 0.2912
Epoch 54/350
1/1 [==============================] - 0s 13ms/step - loss: 0.2327 - val_loss: 0.2870
Epoch 55/350
1/1 [==============================] - 0s 15ms/step - loss: 0.2288 - val_loss: 0.2828
Epoch 56/350
1/1 [==============================] - 0s 13ms/step - loss: 0.2250 - val_loss: 0.2789
Epoch 57/350
1/1 [==============================] - 0s 15ms/step - loss: 0.2213 - val_loss: 0.2750
Epoch 58/350
1/1 [==============================] - 0s 12ms/step - loss: 0.2177 - val_loss: 0.2713
Epoch 59/350
1/1 [==============================] - 0s 14ms/step - loss: 0.2142 - val_loss: 0.2677
Epoch 60/350
1/1 [==============================] - 0s 14ms/step - loss: 0.2108 - val_loss: 0.2642
Epoch 61/350
1/1 [==============================] - 0s 14ms/step - loss: 0.2075 - val_loss: 0.2609
Epoch 62/350
1/1 [==============================] - 0s 14ms/step - loss: 0.2043 - val_loss: 0.2576
Epoch 63/350
1/1 [==============================] - 0s 14ms/step - loss: 0.2012 - val_loss: 0.2545
Epoch 64/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1981 - val_loss: 0.2515
Epoch 65/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1950 - val_loss: 0.2485
Epoch 66/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1920 - val_loss: 0.2457
Epoch 67/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1891 - val_loss: 0.2430
Epoch 68/350
1/1 [==============================] - 0s 17ms/step - loss: 0.1863 - val_loss: 0.2404
Epoch 69/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1837 - val_loss: 0.2379
Epoch 70/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1811 - val_loss: 0.2355
Epoch 71/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1787 - val_loss: 0.2333
Epoch 72/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1764 - val_loss: 0.2312
Epoch 73/350
1/1 [==============================] - 0s 16ms/step - loss: 0.1742 - val_loss: 0.2292
Epoch 74/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1720 - val_loss: 0.2273
Epoch 75/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1699 - val_loss: 0.2255
Epoch 76/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1680 - val_loss: 0.2239
Epoch 77/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1662 - val_loss: 0.2225
Epoch 78/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1644 - val_loss: 0.2212
Epoch 79/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1628 - val_loss: 0.2201
Epoch 80/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1612 - val_loss: 0.2191
Epoch 81/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1597 - val_loss: 0.2182
Epoch 82/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1582 - val_loss: 0.2175
Epoch 83/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1569 - val_loss: 0.2169
Epoch 84/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1555 - val_loss: 0.2164
Epoch 85/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1542 - val_loss: 0.2160
Epoch 86/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1530 - val_loss: 0.2158
Epoch 87/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1518 - val_loss: 0.2157
Epoch 88/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1506 - val_loss: 0.2157
Epoch 89/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1495 - val_loss: 0.2158
Epoch 90/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1483 - val_loss: 0.2160
Epoch 91/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1472 - val_loss: 0.2162
Epoch 92/350
1/1 [==============================] - 0s 17ms/step - loss: 0.1461 - val_loss: 0.2165
Epoch 93/350
1/1 [==============================] - 0s 16ms/step - loss: 0.1451 - val_loss: 0.2169
Epoch 94/350
1/1 [==============================] - 0s 16ms/step - loss: 0.1442 - val_loss: 0.2174
Epoch 95/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1432 - val_loss: 0.2180
Epoch 96/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1422 - val_loss: 0.2186
Epoch 97/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1413 - val_loss: 0.2193
Epoch 98/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1404 - val_loss: 0.2201
Epoch 99/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1394 - val_loss: 0.2209
Epoch 100/350
1/1 [==============================] - 0s 17ms/step - loss: 0.1386 - val_loss: 0.2217
Epoch 101/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1377 - val_loss: 0.2226
Epoch 102/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1368 - val_loss: 0.2234
Epoch 103/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1360 - val_loss: 0.2242
Epoch 104/350
1/1 [==============================] - 0s 16ms/step - loss: 0.1352 - val_loss: 0.2249
Epoch 105/350
1/1 [==============================] - 0s 16ms/step - loss: 0.1344 - val_loss: 0.2257
Epoch 106/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1335 - val_loss: 0.2263
Epoch 107/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1328 - val_loss: 0.2269
Epoch 108/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1320 - val_loss: 0.2274
Epoch 109/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1313 - val_loss: 0.2278
Epoch 110/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1306 - val_loss: 0.2280
Epoch 111/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1300 - val_loss: 0.2281
Epoch 112/350
1/1 [==============================] - 0s 16ms/step - loss: 0.1293 - val_loss: 0.2282
Epoch 113/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1286 - val_loss: 0.2281
Epoch 114/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1279 - val_loss: 0.2280
Epoch 115/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1272 - val_loss: 0.2277
Epoch 116/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1266 - val_loss: 0.2274
Epoch 117/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1259 - val_loss: 0.2270
Epoch 118/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1253 - val_loss: 0.2265
Epoch 119/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1246 - val_loss: 0.2259
Epoch 120/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1240 - val_loss: 0.2252
Epoch 121/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1233 - val_loss: 0.2244
Epoch 122/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1227 - val_loss: 0.2235
Epoch 123/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1220 - val_loss: 0.2226
Epoch 124/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1214 - val_loss: 0.2217
Epoch 125/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1207 - val_loss: 0.2206
Epoch 126/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1201 - val_loss: 0.2196
Epoch 127/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1194 - val_loss: 0.2185
Epoch 128/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1188 - val_loss: 0.2174
Epoch 129/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1181 - val_loss: 0.2163
Epoch 130/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1175 - val_loss: 0.2152
Epoch 131/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1168 - val_loss: 0.2141
Epoch 132/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1161 - val_loss: 0.2129
Epoch 133/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1155 - val_loss: 0.2118
Epoch 134/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1148 - val_loss: 0.2106
Epoch 135/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1141 - val_loss: 0.2094
Epoch 136/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1134 - val_loss: 0.2082
Epoch 137/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1127 - val_loss: 0.2070
Epoch 138/350
1/1 [==============================] - 0s 17ms/step - loss: 0.1120 - val_loss: 0.2057
Epoch 139/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1114 - val_loss: 0.2045
Epoch 140/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1107 - val_loss: 0.2032
Epoch 141/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1100 - val_loss: 0.2019
Epoch 142/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1093 - val_loss: 0.2006
Epoch 143/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1085 - val_loss: 0.1993
Epoch 144/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1078 - val_loss: 0.1979
Epoch 145/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1071 - val_loss: 0.1966
Epoch 146/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1064 - val_loss: 0.1953
Epoch 147/350
1/1 [==============================] - 0s 14ms/step - loss: 0.1056 - val_loss: 0.1940
Epoch 148/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1049 - val_loss: 0.1927
Epoch 149/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1042 - val_loss: 0.1913
Epoch 150/350
1/1 [==============================] - 0s 15ms/step - loss: 0.1034 - val_loss: 0.1901
Epoch 151/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1027 - val_loss: 0.1888
Epoch 152/350
1/1 [==============================] - 0s 13ms/step - loss: 0.1020 - val_loss: 0.1875
Epoch 153/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1012 - val_loss: 0.1862
Epoch 154/350
1/1 [==============================] - 0s 12ms/step - loss: 0.1005 - val_loss: 0.1849
Epoch 155/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0997 - val_loss: 0.1836
Epoch 156/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0990 - val_loss: 0.1823
Epoch 157/350
1/1 [==============================] - 0s 11ms/step - loss: 0.0982 - val_loss: 0.1809
Epoch 158/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0975 - val_loss: 0.1796
Epoch 159/350
1/1 [==============================] - 0s 11ms/step - loss: 0.0968 - val_loss: 0.1782
Epoch 160/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0960 - val_loss: 0.1769
Epoch 161/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0953 - val_loss: 0.1756
Epoch 162/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0946 - val_loss: 0.1743
Epoch 163/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0939 - val_loss: 0.1729
Epoch 164/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0932 - val_loss: 0.1715
Epoch 165/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0924 - val_loss: 0.1702
Epoch 166/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0917 - val_loss: 0.1688
Epoch 167/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0910 - val_loss: 0.1674
Epoch 168/350
1/1 [==============================] - 0s 11ms/step - loss: 0.0903 - val_loss: 0.1661
Epoch 169/350
1/1 [==============================] - 0s 11ms/step - loss: 0.0896 - val_loss: 0.1648
Epoch 170/350
1/1 [==============================] - 0s 11ms/step - loss: 0.0889 - val_loss: 0.1635
Epoch 171/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0882 - val_loss: 0.1622
Epoch 172/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0875 - val_loss: 0.1609
Epoch 173/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0868 - val_loss: 0.1595
Epoch 174/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0862 - val_loss: 0.1582
Epoch 175/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0855 - val_loss: 0.1570
Epoch 176/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0849 - val_loss: 0.1557
Epoch 177/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0843 - val_loss: 0.1545
Epoch 178/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0836 - val_loss: 0.1533
Epoch 179/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0830 - val_loss: 0.1521
Epoch 180/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0825 - val_loss: 0.1509
Epoch 181/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0819 - val_loss: 0.1498
Epoch 182/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0813 - val_loss: 0.1488
Epoch 183/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0808 - val_loss: 0.1478
Epoch 184/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0802 - val_loss: 0.1468
Epoch 185/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0797 - val_loss: 0.1459
Epoch 186/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0791 - val_loss: 0.1450
Epoch 187/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0786 - val_loss: 0.1441
Epoch 188/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0781 - val_loss: 0.1433
Epoch 189/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0776 - val_loss: 0.1425
Epoch 190/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0771 - val_loss: 0.1416
Epoch 191/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0766 - val_loss: 0.1408
Epoch 192/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0762 - val_loss: 0.1399
Epoch 193/350
1/1 [==============================] - 0s 11ms/step - loss: 0.0757 - val_loss: 0.1390
Epoch 194/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0753 - val_loss: 0.1381
Epoch 195/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0749 - val_loss: 0.1372
Epoch 196/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0745 - val_loss: 0.1364
Epoch 197/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0741 - val_loss: 0.1356
Epoch 198/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0737 - val_loss: 0.1348
Epoch 199/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0734 - val_loss: 0.1340
Epoch 200/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0730 - val_loss: 0.1332
Epoch 201/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0727 - val_loss: 0.1324
Epoch 202/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0724 - val_loss: 0.1316
Epoch 203/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0721 - val_loss: 0.1309
Epoch 204/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0718 - val_loss: 0.1301
Epoch 205/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0715 - val_loss: 0.1295
Epoch 206/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0712 - val_loss: 0.1290
Epoch 207/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0709 - val_loss: 0.1285
Epoch 208/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0706 - val_loss: 0.1280
Epoch 209/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0704 - val_loss: 0.1275
Epoch 210/350
1/1 [==============================] - 0s 11ms/step - loss: 0.0701 - val_loss: 0.1271
Epoch 211/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0699 - val_loss: 0.1267
Epoch 212/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0697 - val_loss: 0.1264
Epoch 213/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0694 - val_loss: 0.1260
Epoch 214/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0692 - val_loss: 0.1257
Epoch 215/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0690 - val_loss: 0.1254
Epoch 216/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0688 - val_loss: 0.1251
Epoch 217/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0686 - val_loss: 0.1248
Epoch 218/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0684 - val_loss: 0.1245
Epoch 219/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0683 - val_loss: 0.1244
Epoch 220/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0681 - val_loss: 0.1242
Epoch 221/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0679 - val_loss: 0.1240
Epoch 222/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0678 - val_loss: 0.1238
Epoch 223/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0677 - val_loss: 0.1236
Epoch 224/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0676 - val_loss: 0.1235
Epoch 225/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0674 - val_loss: 0.1233
Epoch 226/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0673 - val_loss: 0.1232
Epoch 227/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0672 - val_loss: 0.1231
Epoch 228/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0671 - val_loss: 0.1230
Epoch 229/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0670 - val_loss: 0.1229
Epoch 230/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0670 - val_loss: 0.1228
Epoch 231/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0669 - val_loss: 0.1228
Epoch 232/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0668 - val_loss: 0.1228
Epoch 233/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0667 - val_loss: 0.1228
Epoch 234/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0667 - val_loss: 0.1229
Epoch 235/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0666 - val_loss: 0.1229
Epoch 236/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0666 - val_loss: 0.1229
Epoch 237/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0665 - val_loss: 0.1230
Epoch 238/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0665 - val_loss: 0.1230
Epoch 239/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0665 - val_loss: 0.1231
Epoch 240/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0664 - val_loss: 0.1232
Epoch 241/350
1/1 [==============================] - 0s 17ms/step - loss: 0.0664 - val_loss: 0.1232
Epoch 242/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0664 - val_loss: 0.1233
Epoch 243/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0663 - val_loss: 0.1234
Epoch 244/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0663 - val_loss: 0.1235
Epoch 245/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0663 - val_loss: 0.1236
Epoch 246/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0663 - val_loss: 0.1237
Epoch 247/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0663 - val_loss: 0.1238
Epoch 248/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0663 - val_loss: 0.1239
Epoch 249/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0662 - val_loss: 0.1240
Epoch 250/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0662 - val_loss: 0.1240
Epoch 251/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0662 - val_loss: 0.1241
Epoch 252/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0662 - val_loss: 0.1241
Epoch 253/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0662 - val_loss: 0.1241
Epoch 254/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0662 - val_loss: 0.1241
Epoch 255/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0662 - val_loss: 0.1241
Epoch 256/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0662 - val_loss: 0.1241
Epoch 257/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0662 - val_loss: 0.1241
Epoch 258/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0662 - val_loss: 0.1240
Epoch 259/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0662 - val_loss: 0.1240
Epoch 260/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0662 - val_loss: 0.1240
Epoch 261/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0662 - val_loss: 0.1239
Epoch 262/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0662 - val_loss: 0.1239
Epoch 263/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0662 - val_loss: 0.1239
Epoch 264/350
1/1 [==============================] - 0s 17ms/step - loss: 0.0661 - val_loss: 0.1238
Epoch 265/350
1/1 [==============================] - 0s 17ms/step - loss: 0.0661 - val_loss: 0.1237
Epoch 266/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0661 - val_loss: 0.1237
Epoch 267/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0661 - val_loss: 0.1236
Epoch 268/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0661 - val_loss: 0.1236
Epoch 269/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0661 - val_loss: 0.1235
Epoch 270/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0661 - val_loss: 0.1235
Epoch 271/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0661 - val_loss: 0.1235
Epoch 272/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0661 - val_loss: 0.1234
Epoch 273/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0661 - val_loss: 0.1234
Epoch 274/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0661 - val_loss: 0.1234
Epoch 275/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0661 - val_loss: 0.1234
Epoch 276/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0661 - val_loss: 0.1234
Epoch 277/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0661 - val_loss: 0.1233
Epoch 278/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0661 - val_loss: 0.1233
Epoch 279/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0661 - val_loss: 0.1233
Epoch 280/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0661 - val_loss: 0.1232
Epoch 281/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0661 - val_loss: 0.1232
Epoch 282/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0661 - val_loss: 0.1231
Epoch 283/350
1/1 [==============================] - 0s 11ms/step - loss: 0.0661 - val_loss: 0.1231
Epoch 284/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0661 - val_loss: 0.1230
Epoch 285/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0661 - val_loss: 0.1230
Epoch 286/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0661 - val_loss: 0.1230
Epoch 287/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0661 - val_loss: 0.1229
Epoch 288/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0660 - val_loss: 0.1229
Epoch 289/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1229
Epoch 290/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1229
Epoch 291/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1228
Epoch 292/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0660 - val_loss: 0.1228
Epoch 293/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1228
Epoch 294/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1228
Epoch 295/350
1/1 [==============================] - 0s 11ms/step - loss: 0.0660 - val_loss: 0.1228
Epoch 296/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1228
Epoch 297/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1228
Epoch 298/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0660 - val_loss: 0.1228
Epoch 299/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1227
Epoch 300/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1227
Epoch 301/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0660 - val_loss: 0.1227
Epoch 302/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1227
Epoch 303/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1227
Epoch 304/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0660 - val_loss: 0.1227
Epoch 305/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1227
Epoch 306/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1227
Epoch 307/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0660 - val_loss: 0.1226
Epoch 308/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0660 - val_loss: 0.1226
Epoch 309/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1226
Epoch 310/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1226
Epoch 311/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1226
Epoch 312/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1226
Epoch 313/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1226
Epoch 314/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1226
Epoch 315/350
1/1 [==============================] - 0s 11ms/step - loss: 0.0660 - val_loss: 0.1226
Epoch 316/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1225
Epoch 317/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1225
Epoch 318/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0660 - val_loss: 0.1225
Epoch 319/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1225
Epoch 320/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1225
Epoch 321/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1224
Epoch 322/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0660 - val_loss: 0.1224
Epoch 323/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1224
Epoch 324/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0660 - val_loss: 0.1224
Epoch 325/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0660 - val_loss: 0.1224
Epoch 326/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0660 - val_loss: 0.1224
Epoch 327/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1224
Epoch 328/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1224
Epoch 329/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1223
Epoch 330/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1223
Epoch 331/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0660 - val_loss: 0.1223
Epoch 332/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1223
Epoch 333/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1223
Epoch 334/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0660 - val_loss: 0.1223
Epoch 335/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1222
Epoch 336/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1222
Epoch 337/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1222
Epoch 338/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1222
Epoch 339/350
1/1 [==============================] - 0s 12ms/step - loss: 0.0660 - val_loss: 0.1222
Epoch 340/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0660 - val_loss: 0.1222
Epoch 341/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0660 - val_loss: 0.1222
Epoch 342/350
1/1 [==============================] - 0s 16ms/step - loss: 0.0659 - val_loss: 0.1222
Epoch 343/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0659 - val_loss: 0.1222
Epoch 344/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0659 - val_loss: 0.1221
Epoch 345/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0659 - val_loss: 0.1221
Epoch 346/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0659 - val_loss: 0.1221
Epoch 347/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0659 - val_loss: 0.1221
Epoch 348/350
1/1 [==============================] - 0s 15ms/step - loss: 0.0659 - val_loss: 0.1221
Epoch 349/350
1/1 [==============================] - 0s 13ms/step - loss: 0.0659 - val_loss: 0.1221
Epoch 350/350
1/1 [==============================] - 0s 14ms/step - loss: 0.0659 - val_loss: 0.1221
In [279]:
# let's analyze one of the layers so see the correlations between
# the categorical values

layer = m.get_layer('mnth')
weights = layer.get_weights()[0]
for i in range(1, weights.shape[0]):
  u = weights[i]
  u = u / np.linalg.norm(u)
  print(i)
  for j in range(1, weights.shape[0]):
    v = weights[j]
    v = v / np.linalg.norm(v)
    print('    ', j, np.dot(u, v))
1
     1 1.0
     2 -0.14496969
     3 -0.8473991
     4 -0.47241148
     5 -0.49715653
     6 -0.63935417
     7 -0.611215
     8 -0.6313006
     9 -0.5967953
     10 -0.6548884
     11 -0.783477
     12 -0.17759678
2
     1 -0.14496969
     2 1.0
     3 0.10009371
     4 -0.48850852
     5 -0.40698156
     6 -0.36555913
     7 -0.054778505
     8 -0.34399512
     9 -0.40515247
     10 -0.23503435
     11 0.09523056
     12 -0.0028539468
3
     1 -0.8473991
     2 0.10009371
     3 0.99999994
     4 0.5074702
     5 0.6030794
     6 0.6880476
     7 0.689814
     8 0.7220048
     9 0.6680979
     10 0.68556476
     11 0.82379985
     12 0.37005067
4
     1 -0.47241148
     2 -0.48850852
     3 0.5074702
     4 1.0
     5 0.94466287
     6 0.8964036
     7 0.872766
     8 0.8856076
     9 0.89026636
     10 0.82736117
     11 0.6387842
     12 0.35075763
5
     1 -0.49715653
     2 -0.40698156
     3 0.6030794
     4 0.94466287
     5 1.0
     6 0.89466524
     7 0.9042355
     8 0.91521597
     9 0.92794555
     10 0.92656654
     11 0.688097
     12 0.5558706
6
     1 -0.63935417
     2 -0.36555913
     3 0.6880476
     4 0.8964036
     5 0.89466524
     6 1.0
     7 0.8836713
     8 0.91519696
     9 0.96283937
     10 0.8869529
     11 0.64831734
     12 0.55903494
7
     1 -0.611215
     2 -0.054778505
     3 0.689814
     4 0.872766
     5 0.9042355
     6 0.8836713
     7 1.0
     8 0.89104503
     9 0.84201884
     10 0.8594391
     11 0.7537739
     12 0.5398556
8
     1 -0.6313006
     2 -0.34399512
     3 0.7220048
     4 0.8856076
     5 0.91521597
     6 0.91519696
     7 0.89104503
     8 0.99999994
     9 0.8884546
     10 0.8736373
     11 0.6119035
     12 0.5399716
9
     1 -0.5967953
     2 -0.40515247
     3 0.6680979
     4 0.89026636
     5 0.92794555
     6 0.96283937
     7 0.84201884
     8 0.8884546
     9 0.9999999
     10 0.95357686
     11 0.6584174
     12 0.62780404
10
     1 -0.6548884
     2 -0.23503435
     3 0.68556476
     4 0.82736117
     5 0.92656654
     6 0.8869529
     7 0.8594391
     8 0.8736373
     9 0.95357686
     10 1.0
     11 0.70306504
     12 0.68660784
11
     1 -0.783477
     2 0.09523056
     3 0.82379985
     4 0.6387842
     5 0.688097
     6 0.64831734
     7 0.7537739
     8 0.6119035
     9 0.6584174
     10 0.70306504
     11 1.0000001
     12 0.1621752
12
     1 -0.17759678
     2 -0.0028539468
     3 0.37005067
     4 0.35075763
     5 0.5558706
     6 0.55903494
     7 0.5398556
     8 0.5399716
     9 0.62780404
     10 0.68660784
     11 0.1621752
     12 1.0
In [280]:
# let's create a new dataframe with the embedding values
x_new = df[['temp', 'hum', 'windspeed']].values
for i in range(len(df_emb_vars)):
    values = df[df_emb_vars[i]].values
    layer = m.get_layer(df_emb_vars[i])
    weights = layer.get_weights()[0]
    emb_values = weights[values]
    x_new = np.concatenate([x_new, emb_values], axis=-1)
In [282]:
train_size = int(0.9 * len(x_new))

x_train = x_new[:train_size]
y_train = y[:train_size]
x_test = x_new[train_size:]
y_test = y[train_size:]
x_train.shape, y_train.shape, x_test.shape, y_test.shape
Out[282]:
((657, 27), (657,), (74, 27), (74,))
In [284]:
# create your model here
from sklearn.ensemble import GradientBoostingRegressor
gb_model = GradientBoostingRegressor(random_state=0)
In [285]:
gb_model.fit(x_train, y_train)
Out[285]:
GradientBoostingRegressor(random_state=0)
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
GradientBoostingRegressor(random_state=0)
In [286]:
p_test = gb_model.predict(x_test)
In [287]:
plt.plot(y_test, color='blue')
plt.plot(p_test, color='red')
plt.show()
In [288]:
train_pred = gb_model.predict(x_train)
In [304]:
from sklearn.metrics import mean_absolute_error
def evaluate_model(model, x_train, y_train, x_test, y_test):
    print('\n================== Training performance: ==================') 
    train_pred = model.predict(x_train)
    train_mae = round(mean_absolute_error(train_pred, y_train), 2)
    print('MAE = ', train_mae)

    print('\n================== Testing performance: ==================') 
    test_pred = model.predict(x_test)
    test_mae = round(mean_absolute_error(test_pred, y_test), 2)
    print('MAE = ', test_mae)

    return train_mae, test_mae
In [305]:
rfr_train_mae, rfr_test_mae = evaluate_model(gb_model, x_train, y_train, x_test, y_test)
================== Training performance: ==================
MAE =  292.25

================== Testing performance: ==================
MAE =  809.53
Deep learning model¶
In [311]:
mean_train = y_train.mean()
std_train = y_train.std()

y_train_norm = (y_train - mean_train) / std_train
y_test_norm = (y_test - mean_train) / std_train
In [312]:
inputs = Input(shape=(x_train.shape[1],))

x = Dense(1024, activation='relu',
          kernel_regularizer=regularizers.l2(0.02))(inputs)
x = Dropout(.5)(x)
x = Dense(512, activation='relu',
          kernel_regularizer=regularizers.l2(0.02))(x)
x = Dropout(.5)(x)
x = Dense(512, activation='relu',
          kernel_regularizer=regularizers.l2(0.02))(x)
outputs = Dense(1)(x)

# define and compile the model:
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss= "mae" ,
              optimizer="adam",
              metrics=["mae"])

# prepare the data:
# X_train, X_val = embed_data_preprocess(d, dval, cat_features, num_features)
In [313]:
history = model.fit(x_train, y_train_norm,
                    validation_data = (x_test, y_test_norm),
                    batch_size=1024, epochs=150)#, callbacks=[clr])
Epoch 1/150
1/1 [==============================] - 0s 307ms/step - loss: 25.7736 - mae: 0.8200 - val_loss: 24.6626 - val_mae: 0.7846
Epoch 2/150
1/1 [==============================] - 0s 14ms/step - loss: 24.6894 - mae: 0.8114 - val_loss: 23.6090 - val_mae: 0.7743
Epoch 3/150
1/1 [==============================] - 0s 14ms/step - loss: 23.6362 - mae: 0.8016 - val_loss: 22.5912 - val_mae: 0.7676
Epoch 4/150
1/1 [==============================] - 0s 15ms/step - loss: 22.6205 - mae: 0.7970 - val_loss: 21.6072 - val_mae: 0.7625
Epoch 5/150
1/1 [==============================] - 0s 13ms/step - loss: 21.6353 - mae: 0.7906 - val_loss: 20.6565 - val_mae: 0.7583
Epoch 6/150
1/1 [==============================] - 0s 15ms/step - loss: 20.6812 - mae: 0.7830 - val_loss: 19.7387 - val_mae: 0.7550
Epoch 7/150
1/1 [==============================] - 0s 13ms/step - loss: 19.7585 - mae: 0.7748 - val_loss: 18.8532 - val_mae: 0.7520
Epoch 8/150
1/1 [==============================] - 0s 12ms/step - loss: 18.8690 - mae: 0.7678 - val_loss: 17.9997 - val_mae: 0.7494
Epoch 9/150
1/1 [==============================] - 0s 12ms/step - loss: 18.0076 - mae: 0.7573 - val_loss: 17.1777 - val_mae: 0.7469
Epoch 10/150
1/1 [==============================] - 0s 15ms/step - loss: 17.1819 - mae: 0.7511 - val_loss: 16.3855 - val_mae: 0.7433
Epoch 11/150
1/1 [==============================] - 0s 12ms/step - loss: 16.3832 - mae: 0.7410 - val_loss: 15.6216 - val_mae: 0.7373
Epoch 12/150
1/1 [==============================] - 0s 13ms/step - loss: 15.6110 - mae: 0.7267 - val_loss: 14.8858 - val_mae: 0.7293
Epoch 13/150
1/1 [==============================] - 0s 15ms/step - loss: 14.8674 - mae: 0.7110 - val_loss: 14.1785 - val_mae: 0.7203
Epoch 14/150
1/1 [==============================] - 0s 14ms/step - loss: 14.1515 - mae: 0.6932 - val_loss: 13.4964 - val_mae: 0.7069
Epoch 15/150
1/1 [==============================] - 0s 14ms/step - loss: 13.4645 - mae: 0.6751 - val_loss: 12.8410 - val_mae: 0.6918
Epoch 16/150
1/1 [==============================] - 0s 16ms/step - loss: 12.7998 - mae: 0.6506 - val_loss: 12.2138 - val_mae: 0.6767
Epoch 17/150
1/1 [==============================] - 0s 14ms/step - loss: 12.1655 - mae: 0.6284 - val_loss: 11.6116 - val_mae: 0.6592
Epoch 18/150
1/1 [==============================] - 0s 13ms/step - loss: 11.5486 - mae: 0.5962 - val_loss: 11.0382 - val_mae: 0.6438
Epoch 19/150
1/1 [==============================] - 0s 12ms/step - loss: 10.9553 - mae: 0.5610 - val_loss: 10.4918 - val_mae: 0.6292
Epoch 20/150
1/1 [==============================] - 0s 16ms/step - loss: 10.3916 - mae: 0.5290 - val_loss: 9.9609 - val_mae: 0.6048
Epoch 21/150
1/1 [==============================] - 0s 12ms/step - loss: 9.8495 - mae: 0.4934 - val_loss: 9.4532 - val_mae: 0.5791
Epoch 22/150
1/1 [==============================] - 0s 14ms/step - loss: 9.3327 - mae: 0.4586 - val_loss: 8.9725 - val_mae: 0.5575
Epoch 23/150
1/1 [==============================] - 0s 13ms/step - loss: 8.8460 - mae: 0.4310 - val_loss: 8.5149 - val_mae: 0.5380
Epoch 24/150
1/1 [==============================] - 0s 13ms/step - loss: 8.3734 - mae: 0.3965 - val_loss: 8.0815 - val_mae: 0.5231
Epoch 25/150
1/1 [==============================] - 0s 15ms/step - loss: 7.9260 - mae: 0.3676 - val_loss: 7.6637 - val_mae: 0.5067
Epoch 26/150
1/1 [==============================] - 0s 13ms/step - loss: 7.5208 - mae: 0.3639 - val_loss: 7.2605 - val_mae: 0.4894
Epoch 27/150
1/1 [==============================] - 0s 14ms/step - loss: 7.1500 - mae: 0.3789 - val_loss: 6.8793 - val_mae: 0.4809
Epoch 28/150
1/1 [==============================] - 0s 12ms/step - loss: 6.7655 - mae: 0.3671 - val_loss: 6.5183 - val_mae: 0.4785
Epoch 29/150
1/1 [==============================] - 0s 14ms/step - loss: 6.3978 - mae: 0.3580 - val_loss: 6.1790 - val_mae: 0.4828
Epoch 30/150
1/1 [==============================] - 0s 12ms/step - loss: 6.0357 - mae: 0.3395 - val_loss: 5.8575 - val_mae: 0.4890
Epoch 31/150
1/1 [==============================] - 0s 13ms/step - loss: 5.7079 - mae: 0.3394 - val_loss: 5.5537 - val_mae: 0.4960
Epoch 32/150
1/1 [==============================] - 0s 14ms/step - loss: 5.4116 - mae: 0.3540 - val_loss: 5.2748 - val_mae: 0.5113
Epoch 33/150
1/1 [==============================] - 0s 16ms/step - loss: 5.1118 - mae: 0.3483 - val_loss: 5.0022 - val_mae: 0.5158
Epoch 34/150
1/1 [==============================] - 0s 14ms/step - loss: 4.8399 - mae: 0.3535 - val_loss: 4.7306 - val_mae: 0.5047
Epoch 35/150
1/1 [==============================] - 0s 15ms/step - loss: 4.5911 - mae: 0.3652 - val_loss: 4.4812 - val_mae: 0.5009
Epoch 36/150
1/1 [==============================] - 0s 12ms/step - loss: 4.3369 - mae: 0.3565 - val_loss: 4.2447 - val_mae: 0.4957
Epoch 37/150
1/1 [==============================] - 0s 12ms/step - loss: 4.0876 - mae: 0.3386 - val_loss: 4.0194 - val_mae: 0.4884
Epoch 38/150
1/1 [==============================] - 0s 15ms/step - loss: 3.8765 - mae: 0.3455 - val_loss: 3.8058 - val_mae: 0.4796
Epoch 39/150
1/1 [==============================] - 0s 12ms/step - loss: 3.6629 - mae: 0.3367 - val_loss: 3.6221 - val_mae: 0.4890
Epoch 40/150
1/1 [==============================] - 0s 13ms/step - loss: 3.4517 - mae: 0.3186 - val_loss: 3.4565 - val_mae: 0.5073
Epoch 41/150
1/1 [==============================] - 0s 14ms/step - loss: 3.2723 - mae: 0.3231 - val_loss: 3.2451 - val_mae: 0.4717
Epoch 42/150
1/1 [==============================] - 0s 14ms/step - loss: 3.0896 - mae: 0.3162 - val_loss: 3.0802 - val_mae: 0.4737
Epoch 43/150
1/1 [==============================] - 0s 12ms/step - loss: 2.9288 - mae: 0.3223 - val_loss: 2.9527 - val_mae: 0.5046
Epoch 44/150
1/1 [==============================] - 0s 12ms/step - loss: 2.7706 - mae: 0.3225 - val_loss: 2.7876 - val_mae: 0.4897
Epoch 45/150
1/1 [==============================] - 0s 12ms/step - loss: 2.6135 - mae: 0.3157 - val_loss: 2.6312 - val_mae: 0.4755
Epoch 46/150
1/1 [==============================] - 0s 12ms/step - loss: 2.4815 - mae: 0.3259 - val_loss: 2.5009 - val_mae: 0.4780
Epoch 47/150
1/1 [==============================] - 0s 15ms/step - loss: 2.3392 - mae: 0.3162 - val_loss: 2.3834 - val_mae: 0.4842
Epoch 48/150
1/1 [==============================] - 0s 14ms/step - loss: 2.2222 - mae: 0.3231 - val_loss: 2.2738 - val_mae: 0.4900
Epoch 49/150
1/1 [==============================] - 0s 14ms/step - loss: 2.1033 - mae: 0.3194 - val_loss: 2.1457 - val_mae: 0.4703
Epoch 50/150
1/1 [==============================] - 0s 12ms/step - loss: 1.9970 - mae: 0.3216 - val_loss: 2.0385 - val_mae: 0.4640
Epoch 51/150
1/1 [==============================] - 0s 14ms/step - loss: 1.8943 - mae: 0.3198 - val_loss: 1.9433 - val_mae: 0.4628
Epoch 52/150
1/1 [==============================] - 0s 14ms/step - loss: 1.7859 - mae: 0.3055 - val_loss: 1.8770 - val_mae: 0.4847
Epoch 53/150
1/1 [==============================] - 0s 12ms/step - loss: 1.6989 - mae: 0.3066 - val_loss: 1.7631 - val_mae: 0.4545
Epoch 54/150
1/1 [==============================] - 0s 13ms/step - loss: 1.6225 - mae: 0.3139 - val_loss: 1.6752 - val_mae: 0.4460
Epoch 55/150
1/1 [==============================] - 0s 14ms/step - loss: 1.5430 - mae: 0.3137 - val_loss: 1.6031 - val_mae: 0.4483
Epoch 56/150
1/1 [==============================] - 0s 13ms/step - loss: 1.4609 - mae: 0.3061 - val_loss: 1.5639 - val_mae: 0.4796
Epoch 57/150
1/1 [==============================] - 0s 12ms/step - loss: 1.3858 - mae: 0.3015 - val_loss: 1.4868 - val_mae: 0.4696
Epoch 58/150
1/1 [==============================] - 0s 12ms/step - loss: 1.3261 - mae: 0.3088 - val_loss: 1.4075 - val_mae: 0.4537
Epoch 59/150
1/1 [==============================] - 0s 12ms/step - loss: 1.2617 - mae: 0.3078 - val_loss: 1.3512 - val_mae: 0.4560
Epoch 60/150
1/1 [==============================] - 0s 12ms/step - loss: 1.1987 - mae: 0.3035 - val_loss: 1.3162 - val_mae: 0.4747
Epoch 61/150
1/1 [==============================] - 0s 12ms/step - loss: 1.1511 - mae: 0.3096 - val_loss: 1.2794 - val_mae: 0.4876
Epoch 62/150
1/1 [==============================] - 0s 12ms/step - loss: 1.1080 - mae: 0.3161 - val_loss: 1.2030 - val_mae: 0.4578
Epoch 63/150
1/1 [==============================] - 0s 15ms/step - loss: 1.0473 - mae: 0.3022 - val_loss: 1.1520 - val_mae: 0.4499
Epoch 64/150
1/1 [==============================] - 0s 14ms/step - loss: 1.0104 - mae: 0.3083 - val_loss: 1.1288 - val_mae: 0.4663
Epoch 65/150
1/1 [==============================] - 0s 12ms/step - loss: 0.9684 - mae: 0.3059 - val_loss: 1.0969 - val_mae: 0.4720
Epoch 66/150
1/1 [==============================] - 0s 12ms/step - loss: 0.9212 - mae: 0.2962 - val_loss: 1.0399 - val_mae: 0.4512
Epoch 67/150
1/1 [==============================] - 0s 14ms/step - loss: 0.8815 - mae: 0.2928 - val_loss: 1.0111 - val_mae: 0.4560
Epoch 68/150
1/1 [==============================] - 0s 12ms/step - loss: 0.8597 - mae: 0.3046 - val_loss: 0.9878 - val_mae: 0.4647
Epoch 69/150
1/1 [==============================] - 0s 12ms/step - loss: 0.8151 - mae: 0.2920 - val_loss: 0.9549 - val_mae: 0.4621
Epoch 70/150
1/1 [==============================] - 0s 12ms/step - loss: 0.7832 - mae: 0.2903 - val_loss: 0.9133 - val_mae: 0.4488
Epoch 71/150
1/1 [==============================] - 0s 12ms/step - loss: 0.7577 - mae: 0.2933 - val_loss: 0.8883 - val_mae: 0.4498
Epoch 72/150
1/1 [==============================] - 0s 13ms/step - loss: 0.7407 - mae: 0.3023 - val_loss: 0.8803 - val_mae: 0.4648
Epoch 73/150
1/1 [==============================] - 0s 12ms/step - loss: 0.7116 - mae: 0.2961 - val_loss: 0.8809 - val_mae: 0.4863
Epoch 74/150
1/1 [==============================] - 0s 14ms/step - loss: 0.7007 - mae: 0.3061 - val_loss: 0.8206 - val_mae: 0.4460
Epoch 75/150
1/1 [==============================] - 0s 12ms/step - loss: 0.6654 - mae: 0.2908 - val_loss: 0.7981 - val_mae: 0.4421
Epoch 76/150
1/1 [==============================] - 0s 15ms/step - loss: 0.6533 - mae: 0.2973 - val_loss: 0.8112 - val_mae: 0.4716
Epoch 77/150
1/1 [==============================] - 0s 13ms/step - loss: 0.6356 - mae: 0.2959 - val_loss: 0.8038 - val_mae: 0.4804
Epoch 78/150
1/1 [==============================] - 0s 13ms/step - loss: 0.6216 - mae: 0.2982 - val_loss: 0.7491 - val_mae: 0.4415
Epoch 79/150
1/1 [==============================] - 0s 15ms/step - loss: 0.5943 - mae: 0.2867 - val_loss: 0.7332 - val_mae: 0.4404
Epoch 80/150
1/1 [==============================] - 0s 13ms/step - loss: 0.5998 - mae: 0.3071 - val_loss: 0.7452 - val_mae: 0.4659
Epoch 81/150
1/1 [==============================] - 0s 15ms/step - loss: 0.5790 - mae: 0.2997 - val_loss: 0.7583 - val_mae: 0.4922
Epoch 82/150
1/1 [==============================] - 0s 15ms/step - loss: 0.5676 - mae: 0.3015 - val_loss: 0.7206 - val_mae: 0.4670
Epoch 83/150
1/1 [==============================] - 0s 12ms/step - loss: 0.5421 - mae: 0.2886 - val_loss: 0.6909 - val_mae: 0.4489
Epoch 84/150
1/1 [==============================] - 0s 12ms/step - loss: 0.5495 - mae: 0.3074 - val_loss: 0.6830 - val_mae: 0.4506
Epoch 85/150
1/1 [==============================] - 0s 12ms/step - loss: 0.5267 - mae: 0.2944 - val_loss: 0.7002 - val_mae: 0.4761
Epoch 86/150
1/1 [==============================] - 0s 12ms/step - loss: 0.5135 - mae: 0.2893 - val_loss: 0.6862 - val_mae: 0.4700
Epoch 87/150
1/1 [==============================] - 0s 12ms/step - loss: 0.5090 - mae: 0.2929 - val_loss: 0.6562 - val_mae: 0.4480
Epoch 88/150
1/1 [==============================] - 0s 14ms/step - loss: 0.4968 - mae: 0.2886 - val_loss: 0.6451 - val_mae: 0.4439
Epoch 89/150
1/1 [==============================] - 0s 14ms/step - loss: 0.4974 - mae: 0.2961 - val_loss: 0.6546 - val_mae: 0.4589
Epoch 90/150
1/1 [==============================] - 0s 12ms/step - loss: 0.4872 - mae: 0.2915 - val_loss: 0.6483 - val_mae: 0.4584
Epoch 91/150
1/1 [==============================] - 0s 12ms/step - loss: 0.4791 - mae: 0.2893 - val_loss: 0.6233 - val_mae: 0.4396
Epoch 92/150
1/1 [==============================] - 0s 12ms/step - loss: 0.4779 - mae: 0.2943 - val_loss: 0.6201 - val_mae: 0.4421
Epoch 93/150
1/1 [==============================] - 0s 12ms/step - loss: 0.4680 - mae: 0.2899 - val_loss: 0.6399 - val_mae: 0.4671
Epoch 94/150
1/1 [==============================] - 0s 14ms/step - loss: 0.4684 - mae: 0.2956 - val_loss: 0.6189 - val_mae: 0.4515
Epoch 95/150
1/1 [==============================] - 0s 13ms/step - loss: 0.4505 - mae: 0.2832 - val_loss: 0.6062 - val_mae: 0.4436
Epoch 96/150
1/1 [==============================] - 0s 12ms/step - loss: 0.4645 - mae: 0.3019 - val_loss: 0.6062 - val_mae: 0.4477
Epoch 97/150
1/1 [==============================] - 0s 13ms/step - loss: 0.4422 - mae: 0.2836 - val_loss: 0.6213 - val_mae: 0.4661
Epoch 98/150
1/1 [==============================] - 0s 15ms/step - loss: 0.4530 - mae: 0.2978 - val_loss: 0.6153 - val_mae: 0.4633
Epoch 99/150
1/1 [==============================] - 0s 14ms/step - loss: 0.4449 - mae: 0.2929 - val_loss: 0.5915 - val_mae: 0.4425
Epoch 100/150
1/1 [==============================] - 0s 14ms/step - loss: 0.4424 - mae: 0.2934 - val_loss: 0.5908 - val_mae: 0.4438
Epoch 101/150
1/1 [==============================] - 0s 15ms/step - loss: 0.4352 - mae: 0.2883 - val_loss: 0.5983 - val_mae: 0.4534
Epoch 102/150
1/1 [==============================] - 0s 12ms/step - loss: 0.4303 - mae: 0.2854 - val_loss: 0.5979 - val_mae: 0.4552
Epoch 103/150
1/1 [==============================] - 0s 12ms/step - loss: 0.4262 - mae: 0.2835 - val_loss: 0.5898 - val_mae: 0.4494
Epoch 104/150
1/1 [==============================] - 0s 13ms/step - loss: 0.4261 - mae: 0.2857 - val_loss: 0.5827 - val_mae: 0.4448
Epoch 105/150
1/1 [==============================] - 0s 12ms/step - loss: 0.4282 - mae: 0.2902 - val_loss: 0.5860 - val_mae: 0.4503
Epoch 106/150
1/1 [==============================] - 0s 16ms/step - loss: 0.4182 - mae: 0.2825 - val_loss: 0.6038 - val_mae: 0.4699
Epoch 107/150
1/1 [==============================] - 0s 13ms/step - loss: 0.4115 - mae: 0.2775 - val_loss: 0.5757 - val_mae: 0.4435
Epoch 108/150
1/1 [==============================] - 0s 12ms/step - loss: 0.4179 - mae: 0.2857 - val_loss: 0.5679 - val_mae: 0.4369
Epoch 109/150
1/1 [==============================] - 0s 14ms/step - loss: 0.4277 - mae: 0.2967 - val_loss: 0.5921 - val_mae: 0.4621
Epoch 110/150
1/1 [==============================] - 0s 13ms/step - loss: 0.4160 - mae: 0.2859 - val_loss: 0.5943 - val_mae: 0.4654
Epoch 111/150
1/1 [==============================] - 0s 13ms/step - loss: 0.4152 - mae: 0.2863 - val_loss: 0.5759 - val_mae: 0.4487
Epoch 112/150
1/1 [==============================] - 0s 16ms/step - loss: 0.4108 - mae: 0.2835 - val_loss: 0.5624 - val_mae: 0.4370
Epoch 113/150
1/1 [==============================] - 0s 13ms/step - loss: 0.4084 - mae: 0.2830 - val_loss: 0.5693 - val_mae: 0.4451
Epoch 114/150
1/1 [==============================] - 0s 15ms/step - loss: 0.4098 - mae: 0.2856 - val_loss: 0.5687 - val_mae: 0.4458
Epoch 115/150
1/1 [==============================] - 0s 16ms/step - loss: 0.4006 - mae: 0.2777 - val_loss: 0.5686 - val_mae: 0.4461
Epoch 116/150
1/1 [==============================] - 0s 15ms/step - loss: 0.4015 - mae: 0.2790 - val_loss: 0.5711 - val_mae: 0.4488
Epoch 117/150
1/1 [==============================] - 0s 13ms/step - loss: 0.4025 - mae: 0.2803 - val_loss: 0.5704 - val_mae: 0.4489
Epoch 118/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3962 - mae: 0.2746 - val_loss: 0.5686 - val_mae: 0.4482
Epoch 119/150
1/1 [==============================] - 0s 15ms/step - loss: 0.3953 - mae: 0.2749 - val_loss: 0.5696 - val_mae: 0.4505
Epoch 120/150
1/1 [==============================] - 0s 12ms/step - loss: 0.4052 - mae: 0.2862 - val_loss: 0.5654 - val_mae: 0.4478
Epoch 121/150
1/1 [==============================] - 0s 14ms/step - loss: 0.4030 - mae: 0.2854 - val_loss: 0.5554 - val_mae: 0.4394
Epoch 122/150
1/1 [==============================] - 0s 13ms/step - loss: 0.3940 - mae: 0.2780 - val_loss: 0.5653 - val_mae: 0.4501
Epoch 123/150
1/1 [==============================] - 0s 13ms/step - loss: 0.4071 - mae: 0.2920 - val_loss: 0.5558 - val_mae: 0.4402
Epoch 124/150
1/1 [==============================] - 0s 13ms/step - loss: 0.3940 - mae: 0.2784 - val_loss: 0.5609 - val_mae: 0.4450
Epoch 125/150
1/1 [==============================] - 0s 13ms/step - loss: 0.3999 - mae: 0.2840 - val_loss: 0.5616 - val_mae: 0.4452
Epoch 126/150
1/1 [==============================] - 0s 13ms/step - loss: 0.3918 - mae: 0.2754 - val_loss: 0.5543 - val_mae: 0.4383
Epoch 127/150
1/1 [==============================] - 0s 15ms/step - loss: 0.3929 - mae: 0.2768 - val_loss: 0.5652 - val_mae: 0.4496
Epoch 128/150
1/1 [==============================] - 0s 15ms/step - loss: 0.3953 - mae: 0.2797 - val_loss: 0.5707 - val_mae: 0.4556
Epoch 129/150
1/1 [==============================] - 0s 13ms/step - loss: 0.3902 - mae: 0.2751 - val_loss: 0.5589 - val_mae: 0.4455
Epoch 130/150
1/1 [==============================] - 0s 13ms/step - loss: 0.3864 - mae: 0.2729 - val_loss: 0.5574 - val_mae: 0.4453
Epoch 131/150
1/1 [==============================] - 0s 13ms/step - loss: 0.3930 - mae: 0.2809 - val_loss: 0.5733 - val_mae: 0.4618
Epoch 132/150
1/1 [==============================] - 0s 14ms/step - loss: 0.4062 - mae: 0.2947 - val_loss: 0.5632 - val_mae: 0.4520
Epoch 133/150
1/1 [==============================] - 0s 14ms/step - loss: 0.3864 - mae: 0.2751 - val_loss: 0.5556 - val_mae: 0.4442
Epoch 134/150
1/1 [==============================] - 0s 13ms/step - loss: 0.3936 - mae: 0.2823 - val_loss: 0.5686 - val_mae: 0.4563
Epoch 135/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3979 - mae: 0.2856 - val_loss: 0.5730 - val_mae: 0.4598
Epoch 136/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3907 - mae: 0.2775 - val_loss: 0.5416 - val_mae: 0.4280
Epoch 137/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3942 - mae: 0.2807 - val_loss: 0.5599 - val_mae: 0.4461
Epoch 138/150
1/1 [==============================] - 0s 13ms/step - loss: 0.3973 - mae: 0.2835 - val_loss: 0.5692 - val_mae: 0.4568
Epoch 139/150
1/1 [==============================] - 0s 13ms/step - loss: 0.3936 - mae: 0.2812 - val_loss: 0.5497 - val_mae: 0.4391
Epoch 140/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3886 - mae: 0.2780 - val_loss: 0.5483 - val_mae: 0.4389
Epoch 141/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3864 - mae: 0.2770 - val_loss: 0.5509 - val_mae: 0.4421
Epoch 142/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3894 - mae: 0.2807 - val_loss: 0.5637 - val_mae: 0.4544
Epoch 143/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3913 - mae: 0.2820 - val_loss: 0.5575 - val_mae: 0.4472
Epoch 144/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3909 - mae: 0.2805 - val_loss: 0.5492 - val_mae: 0.4379
Epoch 145/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3876 - mae: 0.2763 - val_loss: 0.5917 - val_mae: 0.4790
Epoch 146/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3864 - mae: 0.2737 - val_loss: 0.5558 - val_mae: 0.4438
Epoch 147/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3910 - mae: 0.2790 - val_loss: 0.5417 - val_mae: 0.4311
Epoch 148/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3968 - mae: 0.2862 - val_loss: 0.5733 - val_mae: 0.4640
Epoch 149/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3989 - mae: 0.2897 - val_loss: 0.5600 - val_mae: 0.4524
Epoch 150/150
1/1 [==============================] - 0s 12ms/step - loss: 0.3848 - mae: 0.2772 - val_loss: 0.5434 - val_mae: 0.4362
In [314]:
p_predict_DL = model.predict(x_test)
3/3 [==============================] - 0s 817us/step
In [ ]:
 
In [315]:
plt.plot(y_test, color='blue')
plt.plot(p_predict_DL * std_train + mean_train, color='red')
plt.show()
In [ ]:
 
In [316]:
from sklearn.metrics import mean_absolute_error
def evaluate_model(model, x_train, y_train, x_test, y_test, mean_train, std_train):
    print('\n================== Training performance: ==================') 
    train_pred = model.predict(x_train)
    train_mae = round(mean_absolute_error(train_pred * std_train + mean_train, y_train), 2)
    print('MAE = ', train_mae)

    print('\n================== Testing performance: ==================') 
    test_pred = model.predict(x_test)
    test_mae = round(mean_absolute_error(test_pred * std_train + mean_train, y_test), 2)
    print('MAE = ', test_mae)

    return train_mae, test_mae
In [318]:
rfr_train_mae, rfr_test_mae = evaluate_model(model, x_train, y_train, x_test, y_test, mean_train, std_train)
================== Training performance: ==================
21/21 [==============================] - 0s 633us/step
MAE =  510.72

================== Testing performance: ==================
3/3 [==============================] - 0s 813us/step
MAE =  847.67

Which model performed best? the GB model or the DL model?¶

The Gradient Boosting Model performed better than the DL model